Results 1 - 20 of 82
1.
Article in English | MEDLINE | ID: mdl-38709607

ABSTRACT

Activation functions have a significant effect on the dynamics of neural networks (NNs). This study proposes new nonmonotonic wave-type activation functions and examines the complete stability of delayed recurrent NNs (DRNNs) with these activation functions. Using the geometrical properties of the wave-type activation function and a subsequent iteration scheme, sufficient conditions are provided to ensure that a DRNN with n neurons has exactly (2m + 3)^n equilibria, of which (m + 2)^n are locally exponentially stable and the remaining (2m + 3)^n - (m + 2)^n are unstable, where the positive integer m is determined by the wave-type activation function. Furthermore, the DRNN with the proposed activation function is completely stable. Compared with the previous literature, the total number of equilibria and of stable equilibria increases significantly, thereby enhancing the memory storage capacity of the DRNN. Finally, several examples are presented to demonstrate the proposed results.
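The equilibrium counts in this abstract follow a simple combinatorial pattern; the sketch below (a hypothetical illustration, not the paper's code) just evaluates the reported formulas for given n and m:

```python
def equilibrium_counts(n: int, m: int) -> dict:
    """Counts reported in the abstract for a DRNN with n neurons and
    wave-type activation parameter m."""
    total = (2 * m + 3) ** n        # total number of equilibria
    stable = (m + 2) ** n           # locally exponentially stable equilibria
    unstable = total - stable       # the remainder are unstable
    return {"total": total, "stable": stable, "unstable": unstable}

print(equilibrium_counts(n=2, m=1))  # {'total': 25, 'stable': 9, 'unstable': 16}
```

For m = 0 the counts reduce to 3^n total and 2^n stable equilibria, the classical figures for saturating activations, which shows how increasing m enlarges the storage capacity.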

2.
Article in English | MEDLINE | ID: mdl-38619952

ABSTRACT

Most operant conditioning circuits focus predominantly on simple feedback processes; few studies consider the intricacies of feedback outcomes and the uncertainty of feedback timing. This paper proposes a neuromorphic circuit based on operant conditioning with addictiveness and time memory for automatic learning. The circuit is mainly composed of a hunger output module, a neuron module, an excitement output module, a memristor-based decision module, and a memory and feedback generation module. In the circuit, the processes of output excitement and addiction under stochastic feedback are realized, and a memory of the interval between two rewards is formed. These functions allow the circuit to adapt to complex scenarios. In addition, hunger and satiety are introduced to realize the interaction between biological behavior and the desire to explore, which enables the circuit to continuously reshape its memories and actions. The process of operant conditioning theory for automatic learning is thus accomplished. This study of operant conditioning can serve as a reference for more intelligent brain-inspired neural systems.

3.
Neural Netw ; 175: 106295, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38614023

ABSTRACT

Multi-view unsupervised feature selection (MUFS) is an efficient approach for dimensionality reduction of heterogeneous data. However, existing MUFS approaches mostly assign all samples the same weight, so the diversity of samples is not exploited efficiently. Additionally, due to the presence of various regularizations, the resulting MUFS problems are often non-convex, making it difficult to find optimal solutions. To address these issues, a novel MUFS method named Self-paced Regularized Adaptive Multi-view Unsupervised Feature Selection (SPAMUFS) is proposed. Specifically, the proposed approach first trains the MUFS model with simple samples and gradually learns complex samples by using a self-paced regularizer. l2,p-norm (0

Subject(s)
Algorithms , Unsupervised Machine Learning , Humans , Neural Networks, Computer
4.
Neural Netw ; 175: 106312, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38642415

ABSTRACT

In recent years, there has been significant advancement in memristor-based neural networks, positioning them as a pivotal processing-in-memory architecture for a wide array of deep learning applications. Within this realm of progress, the emerging parallel analog memristive platforms are prominent for their ability to generate multiple feature maps in a single processing cycle. However, a notable limitation is that they are tailored to neural networks with fixed structures. As an orthogonal direction, recent research reveals that neural architectures should be specialized for tasks and deployment platforms. Building upon this, neural architecture search (NAS) methods effectively explore promising architectures in a large design space. However, these NAS-based architectures are generally heterogeneous and diversified, making them challenging to deploy on current single-prototype, customized, parallel analog memristive hardware circuits. Therefore, investigating a memristive analog deployment that covers the full search space is a promising and challenging problem. Inspired by this, and beginning with the DARTS search space, we study the memristive hardware design of primitive operations and propose a memristive all-inclusive hypernetwork that covers 2×10^25 network architectures. Our computational simulation results on 3 representative architectures (DARTS-V1, DARTS-V2, PDARTS) show that our memristive all-inclusive hypernetwork achieves promising results on the CIFAR10 dataset (89.2% of PDARTS with 8-bit quantization precision) and is compatible with all architectures in the DARTS full space. The hardware performance simulation indicates that the memristive all-inclusive hypernetwork incurs slightly higher resource consumption (nearly the same power, a 22%-25% increase in latency, and 1.5× the area) relative to individual deployment, which is reasonable and may represent a tolerable trade-off for industrial deployment scenarios.


Subject(s)
Neural Networks, Computer , Computer Simulation , Deep Learning , Algorithms
5.
Cogn Neurodyn ; 18(1): 233-245, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38406206

ABSTRACT

The human brain's ultra-low power consumption and highly parallel computational capability can be approached by memristor-based convolutional neural networks. However, with their rapid development in various fields, more complex applications and heavier computations demand large numbers of memristors, which significantly increases power consumption and enlarges the network model. To mitigate this problem, this paper proposes an SBT-memristor-based convolutional neural network architecture and a hybrid optimization method combining pruning and quantization. Firstly, the SBT-memristor-based convolutional neural network is constructed by exploiting the good thresholding property of the SBT memristor; the memristive in-memory computing unit, activation unit, and max-pooling unit are designed. Then, the hybrid optimization method combining pruning and quantization is used to improve the architecture. This hybrid method can simplify the memristor-based neural network and better represent the weights at the memristive synapses. Finally, the results show that the SBT-memristor-based convolutional neural network requires far fewer memristors, decreases power consumption, and compresses the network model at the cost of a small loss in precision. It achieves faster recognition and lower power consumption on MNIST, providing new insights for complex applications of convolutional neural networks.
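The pruning-plus-quantization pipeline described above can be sketched generically; the example below is a minimal magnitude-pruning and uniform-quantization routine (the ratios and bit width are illustrative assumptions, not the paper's SBT-specific method):

```python
import numpy as np

def prune_and_quantize(weights, prune_ratio=0.5, bits=4):
    """Zero out the smallest-magnitude weights, then snap the survivors
    to 2**bits - 1 uniform levels (illustrative hybrid compression)."""
    w = np.asarray(weights, dtype=float)
    k = int(prune_ratio * w.size)
    if k > 0:
        thresh = np.sort(np.abs(w).ravel())[k - 1]
        w = np.where(np.abs(w) <= thresh, 0.0, w)     # pruning step
    w_max = np.abs(w).max() or 1.0                    # guard: all-zero case
    step = 2 * w_max / (2 ** bits - 1)                # quantization step size
    return np.round(w / step) * step                  # quantization step

w = prune_and_quantize([0.9, -0.05, 0.4, -0.7], prune_ratio=0.5, bits=4)
# half the weights become exactly zero; the rest snap to quantized levels
```

Pruning removes memristors outright, while quantization reduces the conductance precision each remaining memristive synapse must hold, which matches the two levers the abstract combines.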

6.
IEEE Trans Cybern; PP, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38381634

ABSTRACT

Using a fault-tolerant control method, the synchronization of memristive neural networks (MNNs) subject to multiple actuator failures is investigated in this article. The considered actuator failures include effectiveness failures and lock-in-place failures, which differ from those in previous results. First, the mathematical expression of the control inputs in the considered system is given by introducing models of the above two types of actuator failures. Next, two classes of synchronization strategies, namely state feedback control strategies and event-triggered control strategies, are proposed by using inequality techniques and Lyapunov stability theory. Under different parameter conditions, the designed controllers respectively guarantee global exponential, finite-time, and fixed-time synchronization of the MNNs. The settling times of the provided synchronization schemes are then estimated, and the Zeno phenomenon is explicitly excluded for the proposed event-triggered strategies. Finally, two experiments are conducted to confirm the effectiveness of the given synchronization strategies.

7.
Neural Netw ; 172: 106089, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38181617

ABSTRACT

This paper studies the fixed-time synchronization (FDTS) of complex-valued neural networks (CVNNs) based on quantized intermittent control (QIC) and applies it to image protection and 3D point cloud information protection. A new controller is designed that achieves FDTS of the CVNNs, with an estimate of the convergence time that does not depend on the initial state. Our approach divides the neural network into two real-valued systems and then applies the Lyapunov method to give criteria for FDTS. Applying synchronization to image protection, an image is encrypted with a drive-system sequence and decrypted with a response-system sequence; the quality of encryption and decryption depends on the synchronization error. Meanwhile, the depth image of an object is encrypted and the 3D point cloud is then reconstructed from the decrypted depth image, so the 3D point cloud information is protected. Finally, simulation examples verify the efficacy of the controller and the synchronization criterion, giving results for applications in image protection and 3D point cloud information protection.
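The protection scheme described here, masking data with a drive-system sequence and unmasking with the synchronized response sequence, can be sketched generically. The keystream below is a stand-in logistic map, not the paper's CVNN state; the point is that once drive and response are synchronized, both sides generate the same stream and XOR masking becomes invertible:

```python
import numpy as np

def keystream(seed: float, length: int) -> np.ndarray:
    """Stand-in chaotic byte sequence (logistic map), not the paper's CVNN."""
    x, out = seed, []
    for _ in range(length):
        x = 3.99 * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return np.array(out, dtype=np.uint8)

def xor_cipher(data: np.ndarray, seed: float) -> np.ndarray:
    """XOR masking; applying it twice with the same stream recovers the data."""
    return data ^ keystream(seed, data.size).reshape(data.shape)

img = np.array([[12, 200], [55, 7]], dtype=np.uint8)   # toy "image"
enc = xor_cipher(img, seed=0.31)   # encrypted with the drive sequence
dec = xor_cipher(enc, seed=0.31)   # decrypted with the synchronized response
```

A nonzero synchronization error would correspond to the two sides using slightly different seeds, which is why decryption quality degrades with the error.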


Subject(s)
Neural Networks, Computer , Time Factors , Computer Simulation
8.
Neural Netw ; 169: 32-43, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857171

ABSTRACT

The fixed-time (FXT) synchronization of competitive artificial neural networks (ANNs) has previously been explored by proposing discontinuous control strategies based on the signum function and by analyzing short-term memory (STM) and long-term memory (LTM) separately. Note that separate analysis usually leads to complicated theoretical derivations and synchronization conditions, and the signum function inevitably causes chattering that degrades the performance of the control schemes. To address these challenging problems, this paper studies the FXT synchronization of competitive ANNs by establishing an FXT stability theorem of switching type and developing continuous control schemes based on a kind of saturation function. Firstly, unlike the traditional method of studying STM and LTM separately, the STM and LTM models are compressed into one higher-dimensional system to reduce the complexity of the theoretical analysis. Additionally, as an important theoretical preliminary, an FXT stability theorem with switching differential conditions is established, and some high-precision estimates of the convergence time are explicitly presented by means of several special functions. To achieve FXT synchronization of the addressed competitive ANNs, a type of continuous pure power-law control scheme is developed by replacing the signum function with the saturation function, and synchronization criteria are further derived from the established FXT stability theorem. These theoretical results are finally illustrated via a numerical example and applied to image encryption.
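The chattering issue with the signum function, and the continuous saturation replacement, can be illustrated with a toy comparison (the boundary-layer width delta is an assumed parameter, not the paper's notation):

```python
def sign(x: float) -> float:
    """Discontinuous at 0: jumps between -1 and 1, which causes chattering
    when used in a feedback controller near the synchronization manifold."""
    return float((x > 0) - (x < 0))

def sat(x: float, delta: float = 0.1) -> float:
    """Continuous saturation: linear for |x| <= delta, equal to sign(x)
    outside, so the control signal varies smoothly through zero."""
    return max(-1.0, min(1.0, x / delta))

# Near the origin, sat varies smoothly while sign jumps between -1 and 1:
for e in (-0.2, -0.05, 0.0, 0.05, 0.2):
    print(e, sign(e), sat(e))
```

Inside the boundary layer the saturation function trades the infinite gain of the signum for a finite linear gain 1/delta, which is the standard way such schemes avoid chattering.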


Subject(s)
Algorithms , Neural Networks, Computer , Time Factors
9.
IEEE Trans Cybern ; 54(5): 3327-3337, 2024 May.
Article in English | MEDLINE | ID: mdl-38051607

ABSTRACT

This article concentrates on solving the k-winners-take-all (kWTA) problem with large-scale inputs in a distributed setting. We propose a multiagent system with a relatively simple structure, in which each agent is equipped with a 1-D system and interacts with others via binary consensus protocols. That is, only the signs of the relative state information between neighbors are required. By virtue of differential inclusion theory, we prove that the system converges from arbitrary initial states. In addition, we derive the convergence rate as O(1/t). Furthermore, in comparison to existing models, we introduce a novel comparison filter to eliminate the resolution-ratio requirement on the input signal, that is, the requirement that the difference between the kth and (k+1)th largest inputs be larger than a positive threshold. As a result, the proposed distributed kWTA model is capable of solving the kWTA problem even when more than two elements of the input signal share the same value. Finally, we validate the effectiveness of the theoretical results through two simulation examples.
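The kWTA operation itself is easy to state centrally; the article's contribution is the distributed dynamics, but a centralized reference implementation (which, like the comparison filter, tolerates tied inputs) can look like:

```python
def kwta(inputs, k):
    """Indicator of the k largest entries; ties are broken by index order,
    so equal-valued inputs (no resolution-ratio gap) are still handled."""
    order = sorted(range(len(inputs)), key=lambda i: inputs[i], reverse=True)
    winners = set(order[:k])
    return [int(i in winners) for i in range(len(inputs))]

print(kwta([3, 1, 4, 1, 5], k=2))  # [0, 0, 1, 0, 1]
```

A distributed solver must reproduce this output using only the signs of neighbors' relative states, which is what makes the binary consensus setting of the article hard.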

10.
Article in English | MEDLINE | ID: mdl-37819823

ABSTRACT

This article is devoted to analyzing the multistability and robustness of competitive neural networks (NNs) with time-varying delays. Based on the geometrical structure of the activation functions, sufficient conditions are proposed to ascertain the coexistence of ∏_{i=1}^{n} (2R_i + 1) equilibrium points, of which ∏_{i=1}^{n} (R_i + 1) are locally exponentially stable, where n is the dimension of the system and R_i is a parameter related to the activation functions. The derived stability results cover not only exponential stability but also power stability and logarithmic stability. In addition, the robustness of the ∏_{i=1}^{n} (R_i + 1) stable equilibrium points is discussed in the presence of perturbations. Compared with previous papers, the conclusions proposed in this article are easy to verify and enrich the existing stability theories of competitive NNs. Finally, numerical examples are provided to support the theoretical results.
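The coexistence counts here are products over per-neuron parameters R_i; a small sketch (with hypothetical values of R_i) evaluating the two formulas:

```python
from math import prod

def coexistence_counts(R):
    """Total and locally stable equilibrium counts from the abstract's
    products, for per-neuron activation parameters R = (R_1, ..., R_n)."""
    total = prod(2 * r + 1 for r in R)   # prod_i (2 R_i + 1) equilibria
    stable = prod(r + 1 for r in R)      # prod_i (R_i + 1) stable ones
    return total, stable

print(coexistence_counts([1, 2]))  # (15, 6)
```

With identical R_i = R the products collapse to (2R+1)^n and (R+1)^n, matching the uniform-parameter counts seen in related multistability results.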

11.
Neural Netw ; 166: 459-470, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37574620

ABSTRACT

In this paper, a theoretical analysis of the exponential synchronization of a class of coupled switched neural networks subject to stochastic disturbances and impulses is presented. A control law is developed and two sets of sufficient conditions are derived for the synchronization of coupled switched neural networks. First, for desynchronizing stochastic impulses, synchronization is analyzed by the Lyapunov function method, the comparison principle, and an impulsive delay differential inequality. Then, for general stochastic impulses, a set of sufficient conditions based on linear matrix inequalities (LMIs) is derived by partitioning the impulse interval and using the convex combination technique. Finally, two numerical examples and a practical application are elaborated to illustrate the effectiveness of the theoretical results.


Subject(s)
Neural Networks, Computer , Time Factors
12.
Article in English | MEDLINE | ID: mdl-37071512

ABSTRACT

The sparse representation of graphs has shown great potential for accelerating the computation of graph applications (e.g., social networks and knowledge graphs) on traditional computing architectures (CPU, GPU, or TPU). However, the exploration of large-scale sparse graph computing on processing-in-memory (PIM) platforms (typically with memristive crossbars) is still in its infancy. To implement the computation or storage of large-scale or batch graphs on memristive crossbars, a natural assumption is that a large-scale crossbar is required, but it would have low utilization. Some recent works question this assumption; to avoid wasting storage and computational resources, fixed-size or progressively scheduled "block partition" schemes have been proposed. However, these methods are coarse-grained or static and are not effectively sparsity-aware. This work proposes a dynamic sparsity-aware mapping scheme generating method that models the problem as a sequential decision-making process and optimizes it with the REINFORCE reinforcement learning (RL) algorithm. Our generating model, a long short-term memory (LSTM) network combined with a dynamic-fill scheme, achieves remarkable mapping performance on small-scale graph/matrix data (complete mapping costs 43% of the area of the original matrix) and on two large-scale matrices (costing 22.5% of the area on qh882 and 17.1% on qh1484). Our method may be extended to sparse graph computing on other PIM architectures, not limited to memristive device-based platforms.

13.
Neural Netw ; 163: 53-63, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37028154

ABSTRACT

The synchronization problem of bidirectional associative memory memristive neural networks (BAMMNNs) with time-varying delays plays an essential role in the implementation and application of neural networks. Firstly, within the framework of Filippov solutions, the discontinuous parameters of the state-dependent switching are transformed by a convex analysis method, which differs from most previous approaches. Secondly, based on a Lyapunov function and some inequality techniques, several conditions for the fixed-time synchronization (FXTS) of the drive-response systems are obtained by designing special control strategies, and the settling time (ST) is estimated via an improved fixed-time stability lemma. Thirdly, the drive-response BAMMNNs are shown to synchronize within a prescribed time by designing new controllers based on the FXTS results, where the ST is independent of the initial values of the BAMMNNs and the parameters of the controllers. Finally, a numerical simulation is exhibited to verify the correctness of the conclusions.


Subject(s)
Algorithms , Neural Networks, Computer , Time Factors , Computer Simulation
14.
Article in English | MEDLINE | ID: mdl-37018578

ABSTRACT

This article investigates a generalized type of multistability of almost periodic solutions for memristive Cohen-Grossberg neural networks (MCGNNs). Owing to the inevitable disturbances in biological neurons, almost periodic solutions are more common in nature than equilibrium points (EPs); mathematically, they also generalize EPs. Based on the concepts of almost periodic solutions and Ψ-type stability, this article presents a generalized-type multistability definition for almost periodic solutions. The results show that (K+1)^n generalized stable almost periodic solutions can coexist in an MCGNN with n neurons, where K is a parameter of the activation functions. The enlarged attraction basins are also estimated based on the original state-space partition method. Some comparisons and convincing simulations are given at the end of the article to verify the theoretical results.

15.
IEEE Trans Cybern ; 53(10): 6549-6561, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37015518

ABSTRACT

This article focuses on the robust H∞ synchronization of two types of coupled reaction-diffusion neural networks with multiple state and spatial diffusion couplings by utilizing pinning adaptive control strategies. First, based on the Lyapunov functional combined with inequality techniques, several sufficient conditions are formulated to ensure H∞ synchronization for these two networks with parameter uncertainties. Moreover, node-based pinning adaptive control strategies are devised to address the robust H∞ synchronization problem. In addition, some criteria of H∞ synchronization for these two networks under parameter uncertainties are developed via edge-based pinning adaptive controllers. Finally, two numerical examples are presented to verify our results.

16.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10358-10375, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37030840

ABSTRACT

Humans tend to locate heavily occluded facial landmarks by their position relative to easily identified landmarks. This cue, the inherent relation among landmarks, is ignored by most existing methods. In this paper, we present the Dynamic Sparse Local Patch Transformer (DSLPT), a novel face alignment framework for inherent relation learning and uncertainty estimation. Unlike most existing methods, which regress facial landmarks directly from global features, the DSLPT first generates a rough representation of each landmark from a local patch cropped from the feature map and then adaptively aggregates them by a case-dependent inherent relation. Finally, the DSLPT predicts the coordinate and uncertainty of each landmark by regressing their probability distributions from the output features. Moreover, we introduce a coarse-to-fine framework that incorporates the DSLPT for improved results. In this framework, the position and size of each patch are determined by the probability distribution of the corresponding landmark predicted in the previous stage. The dynamic patches ensure a fine-grained landmark representation for inherent relation learning, so that a rough prediction can gradually converge to the target facial landmarks. We integrate the coarse-to-fine model into an end-to-end training pipeline and carry out experiments on the mainstream benchmarks. The results demonstrate that the DSLPT achieves state-of-the-art performance with much lower computational complexity. The code and models are available at https://github.com/Jiahao-UTS/DSLPT.


Subject(s)
Algorithms , Face , Humans , Uncertainty
17.
Article in English | MEDLINE | ID: mdl-37030854

ABSTRACT

With the rapid progress of deep neural network (DNN) applications on memristive platforms, there has been growing interest in the acceleration and compression of memristive networks. As an emerging model optimization technique for memristive platforms, bit-level sparsity training (with fixed-point quantization) can significantly reduce the required resolution of analog-to-digital converters (ADCs), which is critical for energy and area consumption. However, bit sparsity and fixed-point quantization inevitably lead to a large performance loss. Unlike existing training and optimization techniques, this work explores more sparsity-tolerant architectures to compensate for the performance degradation. We first empirically demonstrate that, within a given search space (e.g., the 4-bit quantized DARTS space), network architectures differ in their bit-level sparsity tolerance. It is therefore reasonable and necessary to search for architectures suited to efficient deployment on memristive platforms using neural architecture search (NAS) technology. We further introduce bit-level sparsity-tolerant NAS (BST-NAS), which encapsulates low-precision quantization and bit-level sparsity training into differentiable NAS, to explore optimal bit-level sparsity-tolerant architectures. Experimentally, with the same degree of sparsity and the same experimental settings, our searched architectures obtain promising performance on CIFAR10, outperforming the standard NAS-based DARTS-series architectures (about 5.8% higher than DARTS-V2 and 2.7% higher than PC-DARTS).

18.
Neural Netw ; 163: 28-39, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37023543

ABSTRACT

This paper addresses fixed-time output synchronization problems for two types of complex dynamical networks with multi-weights (CDNMWs) by using two types of adaptive control methods. Firstly, complex dynamical networks with multiple state couplings and with multiple output couplings are respectively presented. Secondly, several fixed-time output synchronization criteria for these two networks are formulated based on Lyapunov functionals and inequality techniques. Thirdly, fixed-time output synchronization issues of these two networks are dealt with by employing two types of adaptive control methods. Finally, the analytical results are verified by two numerical simulations.


Subject(s)
Algorithms , Neural Networks, Computer , Time Factors
19.
Neural Netw ; 162: 175-185, 2023 May.
Article in English | MEDLINE | ID: mdl-36907007

ABSTRACT

This paper studies the global Mittag-Leffler (M-L) stability problem for fractional-order quaternion-valued memristive neural networks (FQVMNNs) with generalized piecewise constant argument (GPCA). First, a novel lemma is established and used to investigate the dynamic behaviors of quaternion-valued memristive neural networks (QVMNNs). Second, by using the theories of differential inclusions, set-valued mappings, and the Banach fixed point theorem, several sufficient criteria are derived to ensure the existence and uniqueness (EU) of the solution and equilibrium point of the associated systems. Then, by constructing Lyapunov functions and employing some inequality techniques, a set of criteria is proposed to ensure the global M-L stability of the considered systems. The results obtained in this paper not only extend previous works but also provide new algebraic criteria with a larger feasible range. Finally, two numerical examples are introduced to illustrate the effectiveness of the obtained results.

20.
Neural Netw ; 157: 11-25, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36306656

ABSTRACT

This paper presents theoretical results on the multiple asymptotic ω-periodicity of a state-dependent switching fractional-order neural network with time delays and sigmoidal activation functions. Firstly, by combining the geometrical properties of the activation functions with the range of the switching threshold, a partition of the state space is given. Then, conditions guaranteeing that the solutions approach each other asymptotically within each positive invariant set are derived. Furthermore, the S-asymptotic ω-periodicity and the convergence of solutions in positive invariant sets are discussed. It is worth noting that the number of attractors increases to 3^n, from 2^n in a neural network without switching. Finally, three numerical examples are given to substantiate the theoretical results.


Subject(s)
Algorithms , Neural Networks, Computer , Periodicity